TOK Friends: Group items tagged "the internet"

How Calls for Privacy May Upend Business for Facebook and Google - The New York Times

  • People detailed their interests and obsessions on Facebook and Google, generating a river of data that could be collected and harnessed for advertising. The companies became very rich. Users seemed happy. Privacy was deemed obsolete, like bloodletting and milkmen
  • It has been many months of allegations and arguments that the internet in general and social media in particular are pulling society down instead of lifting it up.
  • That has inspired a good deal of debate about more restrictive futures for Facebook and Google. At the furthest extreme, some dream of the companies becoming public utilities.
  • There are other avenues still, said Jascha Kaykas-Wolff, the chief marketing officer of Mozilla, the nonprofit organization behind the popular Firefox browser, including advertisers and large tech platforms collecting vastly less user data and still effectively customizing ads to consumers.
  • The greatest likelihood is that the internet companies, frightened by the tumult, will accept a few more rules and work a little harder for transparency.
  • The Cambridge Analytica case, said Vera Jourova, the European Union commissioner for justice, consumers and gender equality, was not just a breach of private data. “This is much more serious, because here we witness the threat to democracy, to democratic plurality,” she said.
  • Although many people had a general understanding that free online services used their personal details to customize the ads they saw, the latest controversy starkly exposed the machinery.
  • Consumers’ seemingly benign activities — their likes — could be used to covertly categorize and influence their behavior. And not just by unknown third parties. Facebook itself has worked directly with presidential campaigns on ad targeting, describing its services in a company case study as “influencing voters.”
  • “If your personal information can help sway elections, which affects everyone’s life and societal well-being, maybe privacy does matter after all.”
  • some trade group executives also warned that any attempt to curb the use of consumer data would put the business model of the ad-supported internet at risk.
  • “You’re undermining a fundamental concept in advertising: reaching consumers who are interested in a particular product,”
  • If suspicion of Facebook and Google is a relatively new feeling in the United States, it has been embedded in Europe for historical and cultural reasons that date back to the Nazi Gestapo, the Soviet occupation of Eastern Europe and the Cold War.
  • “We’re at an inflection point, when the great wave of optimism about tech is giving way to growing alarm,” said Heather Grabbe, director of the Open Society European Policy Institute. “This is the moment when Europeans turn to the state for protection and answers, and are less likely than Americans to rely on the market to sort out imbalances.”
  • In May, the European Union is instituting a comprehensive new privacy law, called the General Data Protection Regulation. The new rules treat personal data as proprietary, owned by an individual, and any use of that data must be accompanied by permission — opting in rather than opting out — after receiving a request written in clear language, not legalese.
  • the protection rules will have more teeth than the current 1995 directive. For example, a company experiencing a data breach involving individuals must notify the data protection authority within 72 hours and would be subject to fines of up to 20 million euros or 4 percent of its annual revenue.
  • “With the new European law, regulators for the first time have real enforcement tools,” said Jeffrey Chester, the executive director of the Center for Digital Democracy, a nonprofit group in Washington. “We now have a way to hold these companies accountable.”
  • Privacy advocates and even some United States regulators have long been concerned about the ability of online services to track consumers and make inferences about their financial status, health concerns and other intimate details to show them behavior-based ads. They warned that such microtargeting could unfairly categorize or exclude certain people.
  • the Do Not Track effort and the privacy bill were both stymied. Industry groups successfully argued that collecting personal details posed no harm to consumers and that efforts to hinder data collection would chill innovation.
  • “If it can be shown that the current situation is actually a market failure and not an individual-company failure, then there’s a case to be made for federal regulation” under certain circumstances
  • The business practices of Facebook and Google were reinforced by the fact that no privacy flap lasted longer than a news cycle or two. Nor did people flee for other services. That convinced the companies that digital privacy was a dead issue.
  • If the current furor dies down without meaningful change, critics worry that the problems might become even more entrenched. When the tech industry follows its natural impulses, it becomes even less transparent.
  • “To know the real interaction between populism and Facebook, you need to give much more access to researchers, not less,” said Paul-Jasper Dittrich, a German research fellow
  • There’s another reason Silicon Valley tends to be reluctant to share information about what it is doing. It believes so deeply in itself that it does not even think there is a need for discussion. The technology world’s remedy for any problem is always more technology

Is That Even a Thing? - The New York Times

  • Speakers and writers of American English have recently taken to identifying a staggering and constantly changing array of trends, events, memes, products, lifestyle choices and phenomena of nearly every kind with a single label — a thing.
  • It would be easy to call this a curiosity of the language and leave it at that. Linguistic trends come and go.
  • One could, on the other hand, consider the use of “a thing” a symptom of an entire generation’s linguistic sloth, general inarticulateness and penchant for cutesy, empty, half-ironic formulations that create a self-satisfied barrier preventing any form of genuine engagement with the world around them.
  • My assumption is that language and experience mutually influence each other. Language not only captures experience, it conditions it. It sets expectations for experience and gives shape to it as it happens. What might register as inarticulateness can reflect a different way of understanding and experiencing the world.
  • The word “thing” has of course long played a versatile and generic role in our language, referring both to physical objects and abstract matters. “The thing is …” “Here’s the thing.” “The play’s the thing.” In these examples, “thing” denotes the matter at hand and functions as stage setting to emphasize an important point. One new thing about “a thing,” then, is the typical use of the indefinite article “a” to precede it. We talk about a thing because we are engaged in cataloging. The question is whether something counts as a thing. “A thing” is not just stage setting. Information is conveyed.
  • What information? One definition of “a thing” that suggests itself right away is “cultural phenomenon.” A new app, an item of celebrity gossip, the practices of a subculture. It seems likely that “a thing” comes from the phrase the coolest/newest/latest thing. But now, in a society where everything, even the past, is new — “new thing” verges on the redundant. If they weren’t new they wouldn’t be things.
  • Clearly, cultural phenomena have long existed and been called “fads,” “trends,” “rages” or have been designated by the category they belong to — “product,” “fashion,” “lifestyle,” etc. So why the application of this homogenizing general term to all of them? I think there are four main reasons.
  • First, the flood of content into the cultural sphere. That we are inundated is well known. Information besieges us in waves that thrash us against the shore until we retreat to the solid ground of work or sleep or exercise or actual human interaction, only to wade cautiously back into our smartphones. As we spend more and more time online, it becomes the content of our experience, and in this sense “things” have earned their name. “A thing” has become the basic unit of cultural ontology.
  • Second, the fragmentation of this sphere. The daily barrage of culture requires that we choose a sliver of the whole in order to keep up. Netflix genres like “Understated Romantic Road Trip Movies” make it clear that the individual is becoming his or her own niche market — the converse of the celebrity as brand. We are increasingly a society of brands attuning themselves to markets, and markets evaluating brands. The specificity of the market requires a wider range of content — of things — to satisfy it
  • Third, the closing gap between satire and the real thing. The absurd excess of things has reached a point where the ironic detachment needed to cope with them is increasingly built into the things themselves, their marketing and the language we use to talk about them. The designator “a thing” is thus almost always tinged with ironic detachment. It puts the thing at arm’s length. You can hardly say “a thing” without a wary glint in your eye.
  • Finally, the growing sense that these phenomena are all the same. As we step back from “things,” they recede into the distance and begin to blur together. We call them all by the same name because they are the same at bottom: All are pieces of the Internet. A thing is for the most part experienced through this medium and generated by it. Even if they arise outside it, things owe their existence as things to the Internet. Google is thus always the arbiter of the question, “Is that a real thing?”
  • “A thing,” then, corresponds to a real need we have, to catalog and group together the items of cultural experience, while keeping them at a sufficient distance so that we can at least feign unified consciousness in the face of a world gone to pieces.

The Ugly Honesty of Elon Musk's Twitter Rebrand - The Atlantic

  • Sexual desire and frustration, familiar feelings for the outcast teenage nerd, pervade the social internet. S3xy-ness is everywhere. Posts by women are dismayingly likely to produce advances, or threats, from creepers on all platforms; at the same time, sex appeal is a pillar for the influencer economy, or else a viable and even noble way to win financial independence. The internet is for porn, as the song goes.
  • In all these ways, online life today descends from where it started, as a safe harbor for the computer nerds who made it. They were socially awkward, concerned with machines instead of people, and devoted to the fantasy of converting their impotence into power.
  • When that conversion was achieved, and the nerds took over the world, they adopted the bravado of the jocks they once despised. (Zuck-Musk cage match, anyone?) But they didn’t stop being nerds. We, the public, never agreed to adopt their worldview as the basis for political, social, or aesthetic life. We got it nevertheless.
  • I’m kind of tired of pretending that the stench does not exist, as if doing otherwise would be tantamount to expressing prejudice against neurodivergence. This is a bad culture, and it always has been.
  • If the X rebrand disgusts you—if, like me, you’ve been made a little queasy by having the new logo thrust upon your phone via automatic update—that feeling is about more than Musk alone. He has merely surfaced what has been there all along. The internet is magical and empowering. The internet is childish and disgusting.
  • Musk’s obsession with X as a brand, and his childish desire to broadcast that obsession from the rooftops in hoggish, bright pulsations, calls attention to this baggage. It reminds us that the world’s richest man is a computer geek, but one with enormous power instead of none
  • It calls attention to the putrid smell that suffuses the history of the internet

Strange things are taking place - at the same time

  • In February 1973, Dr. Bernard Beitman found himself hunched over a kitchen sink in an old Victorian house in San Francisco, choking uncontrollably. He wasn’t eating or drinking, so there was nothing to cough up, and yet for several minutes he couldn’t catch his breath or swallow. The next day his brother called to tell him that 3,000 miles away, in Wilmington, Del., their father had died. He had bled into his throat, choking on his own blood at the same time as Beitman’s mysterious episode.
  • Overcome with awe and emotion, Beitman became fascinated with what he calls meaningful coincidences. After becoming a professor of psychiatry at the University of Missouri-Columbia, he published several papers and two books on the subject and started a nonprofit, the Coincidence Project, to encourage people to share their coincidence stories.
  • “What I look for as a scientist and a spiritual seeker are the patterns that lead to meaningful coincidences,” said Beitman, 80, from his home in Charlottesville, Va. “So many people are reporting this kind of experience. Understanding how it happens is part of the fun.”
  • Beitman defines a coincidence as “two events coming together with apparently no causal explanation.” They can be life-changing, like his experience with his father, or comforting, such as when a loved one’s favorite song comes on the radio just when you are missing them most.
  • Although Beitman has long been fascinated by coincidences, it wasn’t until the end of his academic career that he was able to study them in earnest. (Before then, his research primarily focused on the relationship between chest pain and panic disorder.)
  • He started by developing the Weird Coincidence Survey in 2006 to assess what types of coincidences are most commonly observed, what personality types are most correlated with noticing them and how most people explain them. About 3,000 people have completed the survey so far.
  • he has drawn a few conclusions. The most commonly reported coincidences are associated with mass media: A person thinks of an idea and then hears or sees it on TV, the radio or the internet. Thinking of someone and then having that person call unexpectedly is next on the list, followed by being in the right place at the right time to advance one’s work, career or education.
  • People who describe themselves as spiritual or religious report noticing more meaningful coincidences than those who do not, and people are more likely to experience coincidences when they are in a heightened emotional state — perhaps under stress or grieving.
  • The most popular explanation among survey respondents for mysterious coincidences: God or fate. The second explanation: randomness. The third is that our minds are connected to one another. The fourth is that our minds are connected to the environment.
  • “Some say God, some say universe, some say random and I say ‘Yes,’ ” he said. “People want things to be black and white, yes or no, but I say there is mystery.”
  • He’s particularly interested in what he’s dubbed “simulpathity”: feeling a loved one’s pain at a distance, as he believes he did with his father. Science can’t currently explain how it might occur, but in his books he offers some nontraditional ideas, such as the existence of “the psychosphere,” a kind of mental atmosphere through which information and energy can travel between two people who are emotionally close though physically distant.
  • In his new book published in September, “Meaningful Coincidences: How and Why Synchronicity and Serendipity Happen,” he shares the story of a young man who intended to end his life by the shore of an isolated lake. While he sat crying in his car, another car pulled up and his brother got out. When the young man asked for an explanation, the brother said he didn’t know why he got in the car, where he was going, or what he would do when he got there. He just knew he needed to get in the car and drive.
  • David Hand, a British statistician and author of the 2014 book “The Improbability Principle: Why Coincidences, Miracles, and Rare Events Happen Every Day,” sits at the opposite end of the spectrum from Beitman. He says most coincidences are fairly easy to explain, and he specializes in demystifying even the strangest ones.
  • “When you look closely at a coincidence, you can often discover the chance of it happening is not as small as you think,” he said. “It’s perhaps not a 1-in-a-billion chance, but in fact a 1-in-a-hundred chance, and yeah, you would expect that would happen quite often.”
  • the law of truly large numbers. “You take something that has a very small chance of happening and you give it lots and lots of opportunities to happen,” he said. “Then the overall probability becomes big.” (A short numerical sketch of this idea follows this list.)
  • But just because Hand has a mathematical perspective doesn’t mean he finds coincidences boring. “It’s like looking at a rainbow,” he said. “Just because I understand the physics behind it doesn’t make it any the less wonderful.”
  • Paying attention to coincidences, Osman and Johansen say, is an essential part of how humans make sense of the world. We rely constantly on our understanding of cause and effect to survive.
  • “Coincidences are often associated with something mystical or supernatural, but if you look under the hood, noticing coincidences is what humans do all the time,”
  • Zeltzer has spent 50 years studying the writings of Carl Jung, the 20th century Swiss psychologist who introduced the modern Western world to the idea of synchronicity. Jung defined synchronicity as “the coincidence in time of two or more causally unrelated events which have the same meaning.”
  • One of Jung’s most iconic synchronistic stories concerned a patient who he felt had become so stuck in her rationality that it interfered with her ability to understand her psychology and emotional life.
  • One day, the patient was recounting a dream in which she’d received a golden scarab. Just then, Jung heard a gentle tapping at the window. He opened the window and a scarab-like beetle flew into the room. Jung plucked the insect out of the air and presented it to his patient. “Here is your scarab,” he said. The experience proved therapeutic because it demonstrated to Jung’s patient that the world is not always rational, leading her to break her own identification with rationality and thus become more open to her emotional life, Zeltzer explained
  • Like Jung, Zeltzer believes meaningful coincidences can encourage people to acknowledge the irrational and mysterious. “We have a fantasy that there is always an answer, and that we should know everything,”
  • Honestly, I’m not sure what to believe, but I’m not sure it matters. Like Beitman, my attitude is “Yes.”
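A minimal numerical sketch of Hand’s “law of truly large numbers” as quoted above, assuming independent opportunities that each carry the same tiny probability; the specific numbers below are illustrative, not taken from the article.

```python
# Illustrative only: p and n are invented for this sketch, not drawn from the article.
p = 1e-6  # a "one in a million" coincidence on any single occasion

for n in (1_000, 1_000_000, 10_000_000):   # independent opportunities for it to happen
    at_least_once = 1 - (1 - p) ** n       # P(the coincidence happens at least once)
    print(f"{n:>12,} opportunities -> P(at least once) = {at_least_once:.5f}")

# 1,000 opportunities give roughly a 0.1% chance; 10,000,000 make it all but certain.
```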

For Chat-Based AI, We Are All Once Again Tech Companies' Guinea Pigs - WSJ

  • The companies touting new chat-based artificial-intelligence systems are running a massive experiment—and we are the test subjects.
  • In this experiment, Microsoft, OpenAI and others are rolling out on the internet an alien intelligence that no one really understands, which has been granted the ability to influence our assessment of what’s true in the world.
  • Companies have been cautious in the past about unleashing this technology on the world. In 2019, OpenAI decided not to release an earlier version of the underlying model that powers both ChatGPT and the new Bing because the company’s leaders deemed it too dangerous to do so, they said at the time.
  • Microsoft leaders felt “enormous urgency” for it to be the company to bring this technology to market, because others around the world are working on similar tech but might not have the resources or inclination to build it as responsibly, says Sarah Bird, a leader on Microsoft’s responsible AI team.
  • One common starting point for such models is what is essentially a download or “scrape” of most of the internet. In the past, these language models were used to try to understand text, but the new generation of them, part of the revolution in “generative” AI, uses those same models to create texts by trying to guess, one word at a time, the most likely word to come next in any given sequence. (A toy sketch of this next-word loop follows this list.)
  • Wide-scale testing gives Microsoft and OpenAI a big competitive edge by enabling them to gather huge amounts of data about how people actually use such chatbots. Both the prompts users input into their systems, and the results their AIs spit out, can then be fed back into a complicated system—which includes human content moderators paid by the companies—to improve it.
  • Being first to market with a chat-based AI gives these companies a huge initial lead over companies that have been slower to release their own chat-based AIs, such as Google.
  • rarely has an experiment like Microsoft and OpenAI’s been rolled out so quickly, and at such a broad scale.
  • Among those who build and study these kinds of AIs, Mr. Altman’s case for experimenting on the global public has inspired responses ranging from raised eyebrows to condemnation.
  • The fact that we’re all guinea pigs in this experiment doesn’t mean it shouldn’t be conducted, says Nathan Lambert, a research scientist at the AI startup Huggingface.
  • “I would kind of be happier with Microsoft doing this experiment than a startup, because Microsoft will at least address these issues when the press cycle gets really bad,” says Dr. Lambert. “I think there are going to be a lot of harms from this kind of AI, and it’s better people know they are coming,” he adds.
  • Others, particularly those who study and advocate for the concept of “ethical AI” or “responsible AI,” argue that the global experiment Microsoft and OpenAI are conducting is downright dangerous
  • Celeste Kidd, a professor of psychology at University of California, Berkeley, studies how people acquire knowledge
  • Her research has shown that people learning about new things have a narrow window in which they form a lasting opinion. Seeing misinformation during this critical initial period of exposure to a new concept—such as the kind of misinformation that chat-based AIs can confidently dispense—can do lasting harm, she says.
  • Dr. Kidd likens OpenAI’s experimentation with AI to exposing the public to possibly dangerous chemicals. “Imagine you put something carcinogenic in the drinking water and you were like, ‘We’ll see if it’s carcinogenic.’ After, you can’t take it back—people have cancer now,”
  • Part of the challenge with AI chatbots is that they can sometimes simply make things up. Numerous examples of this tendency have been documented by users of both ChatGPT and OpenAI
  • These models also tend to be riddled with biases that may not be immediately apparent to users. For example, they can express opinions gleaned from the internet as if they were verified facts
  • When millions are exposed to these biases across billions of interactions, this AI has the potential to refashion humanity’s views, at a global scale, says Dr. Kidd.
  • OpenAI has talked publicly about the problems with these systems, and how it is trying to address them. In a recent blog post, the company said that in the future, users might be able to select AIs whose “values” align with their own.
  • “We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society,” the post said.
  • Eliminating made-up information and bias from chat-based search engines is impossible given the current state of the technology, says Mark Riedl, a professor at Georgia Institute of Technology who studies artificial intelligence
  • He believes the release of these technologies to the public by Microsoft and OpenAI is premature. “We are putting out products that are still being actively researched at this moment,” he adds. 
  • in other areas of human endeavor—from new drugs and new modes of transportation to advertising and broadcast media—we have standards for what can and cannot be unleashed on the public. No such standards exist for AI, says Dr. Riedl.
  • To modify these AIs so that they produce outputs that humans find both useful and not-offensive, engineers often use a process called “reinforcement learning through human feedback.”
  • that’s a fancy way of saying that humans provide input to the raw AI algorithm, often by simply saying which of its potential responses to a query are better—and also which are not acceptable at all.
  • Microsoft’s and OpenAI’s globe-spanning experiments on millions of people are yielding a fire hose of data for both companies. User-entered prompts and the AI-generated results are fed back through a network of paid human AI trainers to further fine-tune the models,
  • Huggingface’s Dr. Lambert says that any company, including his own, that doesn’t have this river of real-world usage data helping it improve its AI is at a huge disadvantage
  • In chatbots, in some autonomous-driving systems, in the unaccountable AIs that decide what we see on social media, and now, in the latest applications of AI, again and again we are the guinea pigs on which tech companies are testing new technology.
  • It may be the case that there is no other way to roll out this latest iteration of AI—which is already showing promise in some areas—at scale. But we should always be asking, at times like these: At what price?
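A toy, hedged sketch of the “guess the most likely next word” loop described above. The tiny probability table is invented for illustration; real generative models learn such probabilities from an internet-scale scrape and then refine their outputs with human feedback, as the annotations note.

```python
# Toy next-word predictor: a hand-written probability table stands in for a trained model.
# Everything here is illustrative; no real system works from a table this small.
toy_model = {
    "the": {"internet": 0.6, "experiment": 0.4},
    "internet": {"is": 1.0},
    "is": {"an": 0.7, "the": 0.3},
    "an": {"experiment": 1.0},
    "experiment": {".": 1.0},
}

def generate(prompt: str, max_words: int = 8) -> str:
    words = prompt.split()
    for _ in range(max_words):
        options = toy_model.get(words[-1])
        if not options:
            break
        next_word = max(options, key=options.get)  # greedy: always take the most likely word
        words.append(next_word)
        if next_word == ".":
            break
    return " ".join(words)

print(generate("the"))  # -> "the internet is an experiment ."
```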

Is Anything Still True? On the Internet, No One Knows Anymore - WSJ

  • Creating and disseminating convincing propaganda used to require the resources of a state. Now all it takes is a smartphone.
  • Generative artificial intelligence is now capable of creating fake pictures, clones of our voices, and even videos depicting and distorting world events. The result: From our personal circles to the political circuses, everyone must now question whether what they see and hear is true.
  • exposure to AI-generated fakes can make us question the authenticity of everything we see. Real images and real recordings can be dismissed as fake. 
  • “When you show people deepfakes and generative AI, a lot of times they come out of the experiment saying, ‘I just don’t trust anything anymore,’” says David Rand, a professor at MIT Sloan who studies the creation, spread and impact of misinformation.
  • This problem, which has grown more acute in the age of generative AI, is known as the “liar’s dividend.”
  • The combination of easily-generated fake content and the suspicion that anything might be fake allows people to choose what they want to believe, adds DiResta, leading to what she calls “bespoke realities.”
  • Examples of misleading content created by generative AI are not hard to come by, especially on social media
  • The signs that an image is AI-generated are easy to miss for a user simply scrolling past, who has an instant to decide whether to like or boost a post on social media. And as generative AI continues to improve, it’s likely that such signs will be harder to spot in the future.
  • “What our work suggests is that most regular people do not want to share false things—the problem is they are not paying attention,”
  • in the course of a lawsuit over the death of a man using Tesla’s “full self-driving” system, Elon Musk’s lawyers responded to video evidence of Musk making claims about this software by suggesting that the proliferation of “deepfakes” of Musk was grounds to dismiss such evidence. They advanced that argument even though the clip of Musk was verifiably real
  • are now using its existence as a pretext to dismiss accurate information
  • People’s attention is already limited, and the way social media works—encouraging us to gorge on content, while quickly deciding whether or not to share it—leaves us precious little capacity to determine whether or not something is true
  • If the crisis of authenticity were limited to social media, we might be able to take solace in communication with those closest to us. But even those interactions are now potentially rife with AI-generated fakes.
  • what sounds like a call from a grandchild requesting bail money may be scammers who have scraped recordings of the grandchild’s voice from social media to dupe a grandparent into sending money.
  • companies like Alphabet, the parent company of Google, are trying to spin the altering of personal images as a good thing. 
  • With its latest Pixel phone, the company unveiled a suite of new and upgraded tools that can automatically replace a person’s face in one image with their face from another, or quickly remove someone from a photo entirely.
  • Joseph Stalin, who was fond of erasing people he didn’t like from official photos, would have loved this technology.
  • In Google’s defense, it is adding a record of whether an image was altered to data attached to it. But such metadata is only accessible in the original photo and some copies, and is easy enough to strip out.
  • The rapid adoption of many different AI tools means that we are now forced to question everything that we are exposed to in any medium, from our immediate communities to the geopolitical, said Hany Farid, a professor at the University of California, Berkeley.
  • To put our current moment in historical context, he notes that the PC revolution made it easy to store and replicate information, the internet made it easy to publish it, the mobile revolution made it easier than ever to access and spread, and the rise of AI has made creating misinformation a cinch. And each revolution arrived faster than the one before it.
  • Not everyone agrees that arming the public with easy access to AI will exacerbate our current difficulties with misinformation. The primary argument of such experts is that there is already vastly more misinformation on the internet than a person can consume, so throwing more into the mix won’t make things worse.
  • it’s not exactly reassuring, especially given that trust in institutions is already at one of the lowest points in the past 70 years, according to the nonpartisan Pew Research Center, and polarization—a measure of how much we distrust one another—is at a high point.
  • “What happens when we have eroded trust in media, government, and experts?” says Farid. “If you don’t trust me and I don’t trust you, how do we respond to pandemics, or climate change, or have fair and open elections? This is how authoritarianism arises—when you erode trust in institutions.”

How Joe Biden's Digital Team Tamed the MAGA Internet - The New York Times

  • it’s worth looking under the hood of the Biden digital strategy to see what future campaigns might learn from it.
  • while the internet alone didn’t get Mr. Biden elected, a few key decisions helped his chances.
  • 1. Lean On Influencers and Validators
  • In the early days of his campaign, Mr. Biden’s team envisioned setting up its own digital media empire. It posted videos to his official YouTube channel, conducted virtual forums and even set up a podcast hosted by Mr. Biden, “Here’s the Deal.”
  • those efforts were marred by technical glitches and lukewarm receptions, and they never came close to rivaling the reach of Mr. Trump’s social media machine.
  • So the campaign pivoted to a different strategy, which involved expanding Mr. Biden’s reach by working with social media influencers and “validators.”
  • Perhaps the campaign’s most unlikely validator was Fox News. Headlines from the outlet that reflected well on Mr. Biden were relatively rare, but the campaign’s tests showed that they were more persuasive to on-the-fence voters than headlines from other outlets
  • the “Rebel Alliance,” a jokey nod to Mr. Parscale’s “Death Star,” and it eventually grew to include the proprietors of pages like Occupy Democrats, Call to Activism, The Other 98 Percent and Being Liberal.
  • 2. Tune Out Twitter, and Focus on ‘Facebook Moms’
  • “The whole Biden campaign ethos was ‘Twitter isn’t real life,’” Mr. Flaherty said. “There are risks of running a campaign that is too hyper-aware of your own ideological corner.”
  • As it focused on Facebook, the Biden campaign paid extra attention to “Facebook moms” — women who spend a lot of time sharing cute and uplifting content
  • “Our goal was really to meet people where they were,”
  • 3. Build a Facebook Brain Trust
  • “When people saw a Fox News headline endorsing Joe Biden, it made them stop scrolling and think.”
  • Ultimately, he said, the campaign’s entire digital strategy — the Malarkey Factory, the TikTok creators and Facebook moms, the Fortnite signs and small-batch creators — was about trying to reach a kinder, gentler version of the internet that it still believed existed.
  • “I had the freedom to go for the jugular,” said Rafael Rivero, a co-founder of Occupy Democrats and Ridin’ With Biden, another big pro-Biden Facebook page.
  • “It was sort of a big, distributed message test,” Mr. Flaherty said of the Rebel Alliance. “If it was popping through Occupy or any of our other partners, we knew there was heat there.”
  • These left-wing pages gave the campaign a bigger Facebook audience than it could have reached on its own. But they also allowed Mr. Biden to keep most of his messaging positive, while still tapping into the anger and outrage many Democratic voters felt.
  • 4. Promote ‘Small-Batch Creators,’ Not Just Slick Commercials
  • the Biden campaign found that traditional political ads — professionally produced, slick-looking 30-second spots — were far less effective than impromptu, behind-the-scenes footage and ads that featured regular voters talking directly into their smartphones or webcams about why they were voting for Mr. Biden.
  • “The things that were realer, more grainy and cheaper to produce were more credible.”
  • In addition to hiring traditional Democratic ad firms, the campaign also teamed up with what it called “small-batch creators” — lesser-known producers and digital creators, some of whom had little experience making political ads
  • 5. Fight Misinformation, but Pick Your Battles
  • The campaign formed an in-house effort to combat these rumors, known as the “Malarkey Factory.” But it picked its battles carefully, using data from voter testing to guide its responses.
  • “The Hunter Biden conversation was many times larger than the Hillary Clinton email conversation, but it really didn’t stick, because people think Joe Biden’s a good guy,”
  • the campaign’s focus on empathy had informed how it treated misinformation: not as a cynical Trump ploy that was swallowed by credulous dupes, but as something that required listening to voters to understand their concerns and worries before fighting back
  • On the messaging app Signal, the page owners formed a group text that became a kind of rapid-response brain trust for the campaign.
  • “We made a decision early that we were going to be authentically Joe Biden online, even when people were saying that was a trap.”

Mark Zuckerberg, Let Me Pay for Facebook - NYTimes.com

  • 93 percent of the public believes that “being in control of who can get information about them is important,” and yet the amount of information we generate online has exploded and we seldom know where it all goes.
  • the pop-up and the ad-financed business model. The former is annoying but it’s the latter that is helping destroy the fabric of a rich, pluralistic Internet.
  • Facebook makes about 20 cents per user per month in profit. This is a pitiful sum, especially since the average user spends an impressive 20 hours on Facebook every month, according to the company. This paltry profit margin drives the business model: Internet ads are basically worthless unless they are hyper-targeted based on tracking and extensive profiling of users. This is a bad bargain, especially since two-thirds of American adults don’t want ads that target them based on that tracking and analysis of personal behavior.
  • This way of doing business rewards huge Internet platforms, since ads that are worth so little can support only companies with hundreds of millions of users.
  • Ad-based businesses distort our online interactions. People flock to Internet platforms because they help us connect with one another or the world’s bounty of information — a crucial, valuable function. Yet ad-based financing means that the companies have an interest in manipulating our attention on behalf of advertisers, instead of letting us connect as we wish.
  • Many users think their feed shows everything that their friends post. It doesn’t. Facebook runs its billion-plus users’ newsfeed by a proprietary, ever-changing algorithm that decides what we see. If Facebook didn’t have to control the feed to keep us on the site longer and to inject ads into our stream, it could instead offer us control over this algorithm.
  • Many nonprofits and civic groups that were initially thrilled about their success in using Facebook to reach people are now despondent as their entries are less and less likely to reach people who “liked” their posts unless they pay Facebook to help boost their updates.
  • What to do? It’s simple: Internet sites should allow their users to be the customers. I would, as I bet many others would, happily pay more than 20 cents per month for a Facebook or a Google that did not track me, upgraded its encryption and treated me as a customer whose preferences and privacy matter.
  • Many people say that no significant number of users will ever pay directly for Internet services. But that is because we are misled by the mantra that these services are free. With growing awareness of the privacy cost of ads, this may well change. Millions of people pay for Netflix despite the fact that pirated copies of many movies are available free. We eventually pay for ads, anyway, as that cost is baked into products we purchase
  • A seamless, secure micropayment system that spreads a few pennies at a time as we browse a social network, up to a preset monthly limit, would alter the whole landscape for the better.
  • we’re not starting from scratch. Micropayment systems that would allow users to spend a few cents here and there, not be so easily tracked by all the Big Brothers, and even allow personalization were developed in the early days of the Internet. Big banks and large Internet platforms didn’t show much interest in this micropayment path, which would limit their surveillance abilities. We can revive it.
  • If even a quarter of Facebook’s 1.5 billion users were willing to pay $1 per month in return for not being tracked or targeted based on their data, that would yield more than $4 billion per year — surely a number worth considering. (The arithmetic is checked in the short sketch after this list.)
  • Mr. Zuckerberg has reportedly spent more than $30 million to buy the homes around his in Palo Alto, Calif., and more than $100 million for a secluded parcel of land in Hawaii. He knows privacy is worth paying for. So he should let us pay a few dollars to protect ours.
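A quick, hedged check of the arithmetic quoted above, using only figures that appear in the annotations (20 cents of ad profit per user per month, 1.5 billion users, a quarter of them paying $1 per month).

```python
users = 1_500_000_000              # Facebook users cited in the piece
ad_profit_per_user_month = 0.20    # dollars of profit per user per month, as quoted

current_annual_ad_profit = users * ad_profit_per_user_month * 12
subscription_annual = users * 0.25 * 1.00 * 12   # a quarter of users paying $1/month

print(f"Ad-based profit:    ${current_annual_ad_profit:,.0f} per year")
print(f"Subscription model: ${subscription_annual:,.0f} per year")
# -> roughly $3.6 billion vs. $4.5 billion, consistent with the article's
#    "more than $4 billion per year" for the subscription scenario.
```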

Big Think Interview With Nicholas Carr | Nicholas Carr | Big Think

  • Neurologically, how does our brain adapt itself to new technologies? Nicholas Carr: A couple of types of adaptations take place in your brain. One is a strengthening of the synaptical connections between the neurons involved in using that instrument, in using that tool. And basically these are chemical – neural chemical changes. So you know, cells in our brain communicate by transmitting electrical signals between them and those electrical signals are actually activated by the exchange of chemicals, neurotransmitters in our synapses. And so when you begin to use a tool, for instance, you have much stronger electrochemical signals being processed in those – through those synaptical connections. And then the second, and even more interesting adaptation is in actual physical changes, anatomical changes. Your neurons, you may grow new neurons that are then recruited into these circuits or your existing neurons may grow new synaptical terminals. And again, that also serves to strengthen the activity in those, in those particular pathways that are being used – new pathways. On the other hand, you know, the brain likes to be efficient and so even as it’s strengthening the pathways you’re exercising, it’s pulling – it’s weakening the connections in other ways between the cells that supported old ways of thinking or working or behaving, or whatever that you’re not exercising so much.
  • And it was only in around the year 800 or 900 that we saw the introduction of word spaces. And suddenly reading became, in a sense, easier and suddenly you had the arrival of silent reading, which changed the act of reading from just transcription of speech to something that every individual did on their own. And suddenly you had this whole deal of the silent solitary reader who was improving their mind, expanding their horizons, and so forth. And when Gutenberg invented the printing press around 1450, what that served to do was take this new very attentive, very deep form of reading, which had been limited to just, you know, monasteries and universities, and by making books much cheaper and much more available, spread that way of reading out to a much larger mass of audience. And so we saw, for the last 500 years or so, one of the central facts of culture was deep solitary reading.
  • What the book does as a technology is shield us from distraction. The only thing going on is the, you know, the progression of words and sentences across page after page and so suddenly we see this immersive kind of very attentive thinking, whether you are paying attention to a story or to an argument, or whatever. And what we know about the brain is the brain adapts to these types of tools.
  • we adapt to the environment of the internet, which is an environment of kind of constant immersion and information and constant distractions, interruptions, juggling lots of messages, lots of bits of information.
  • Because it’s no longer just a matter of personal choice, of personal discipline, though obviously those things are always important, but what we’re seeing, and we see this over and over again in the history of technology, is that the technology – the technology of the web, the technology of digital media, gets entwined very, very deeply into social processes, into expectations. So more and more, for instance in our work lives. You know, if our boss and all our colleagues are constantly exchanging messages, constantly checking email on their Blackberry or iPhone or their Droid or whatever, then it becomes very difficult to say, I’m not going to be as connected because you feel like your career is going to take a hit.
  • With the arrival – with the transfer now of text more and more onto screens, we see, I think, a new and in some ways more primitive way of reading. In order to take in information off a screen, when you are also being bombarded with all sorts of other information and when there are links in the text where you have to think even for just a fraction of a second, you know, do I click on this link or not. Suddenly reading again becomes a more cognitively intensive act, the way it was back when there were no spaces between words.
  • If all your friends are planning their social lives through texts and Facebook and Twitter and so forth, then to back away from that means to feel socially isolated. And of course for all people, particularly for young people, there’s kind of nothing worse than feeling socially isolated, that your friends are, you know, having these conversations and you’re not involved. So it’s easy to say the solution, which is to, you know, become a little bit more disconnected. What’s hard is actually doing that.
  • if you want to change your brain, you change your habits. You change your habits of thinking. And that means, you know, setting aside time to engage in more contemplative, more reflective ways of thinking, to be – to screen out distractions. And that means retreating from digital media and from the web and from Smart Phones and texting and Facebook and Tweeting and everything else.
  • The Thinker was, you know, in a contemplative pose and was concentrating deeply, and wasn’t you know, multi-tasking. And because that is something that, until recently anyway, people always thought was the deepest and most distinctly human way of thinking.
  • we may end up finding that those are actually the most valuable ways of thinking that are available to us as human beings.
  • the ability to pay attention also is very important for our ability to build memories, to transfer information from our short-term memory to our long-term memory. And only when we do that do we weave new information into everything else we have stored in our brains. All the other facts we’ve learned, all the other experiences we’ve had, emotions we’ve felt. And that’s how you build, I think, a rich intellect and a rich intellectual life.
  • On the other hand, there is a cost. We lose – we begin to lose the facilities that we don’t exercise. So adaptation has both a very, very positive side, but also a potentially negative side because ultimately our brain is qualitatively neutral. It doesn’t care what it’s strengthening or what it’s weakening, it just responds to the way we’re exercising our mind.
  • the book in some ways is the most interesting from our own present standpoint, particularly when we want to think about the way the internet is changing us. It’s interesting to think about how the book changed us.
  • So we become, after the arrival of the printing press in general, more attentive, more attuned to contemplative ways of thinking. And that’s a very unnatural way of using our mind. You know, paying attention, filtering out distractions.
  • what we lose is the ability to pay deep attention to one thing for a sustained period of time, to filter out distractions.

Opinion | If You Want to Understand How Dangerous Elon Musk Is, Look Outside America - ...

  • Twitter was an intoxicating window into my fascinating new assignment. Long suppressed groups found their voices and social media-driven revolutions began to unfold. Movements against corruption gained steam and brought real change. Outrage over a horrific gang rape in Delhi built a movement to fight an epidemic of sexual violence.
  • “What we didn’t realize — because we took it for granted for so long — is that most people spoke with a great deal of freedom, and completely unconscious freedom,” said Nilanjana Roy, a writer who was part of my initial group of Twitter friends in India. “You could criticize the government, debate certain religious practices. It seems unreal now.”
  • Soon enough, other kinds of underrepresented voices also started to appear on — and then dominate — the platform. As women, Muslims and people from lower castes spoke out, the inevitable backlash came. Supporters of the conservative opposition party, the Bharatiya Janata Party, and their right-wing religious allies felt that they had long been ignored by the mainstream press. Now they had the chance to grab the mic.
  • Viewed from the United States, these skirmishes over the unaccountable power of tech platforms seem like a central battleground of free speech. But the real threat in much of the world is not the policies of social media companies, but of governments.
  • The real question now is if Musk’s commitment to “free speech” extends beyond conservatives in America and to the billions of people in the Global South who rely on the internet for open communication.
  • India’s government had demanded that Twitter block tweets and accounts from a variety of journalists, activists and politicians. The company went to court, arguing that these demands went beyond the law and into censorship. Now Twitter’s potential new owner was casting doubt on whether the company should be defying government demands that muzzle freedom of expression.
  • The winning side will not be decided in Silicon Valley or Beijing, the two poles around which debate over free expression on the internet have largely orbited. It will be the actions of governments in capitals like Abuja, Jakarta, Ankara, Brasília and New Delhi.
  • while much of the focus has been on countries like China, which overtly restricts access to huge swaths of the internet, the real war over the future of internet freedom is being waged in what she called “swing states,” big, fragile democracies like India.
  • other governments are passing laws just to increase their power over speech online and to force companies to be an extension of state surveillance.” For example: requiring companies to house their servers locally rather than abroad, which can make them more vulnerable to government surveillance.
  • Across the world, countries are putting in place frameworks that on their face seem designed to combat online abuse and misinformation but are largely used to stifle dissent or enable abuse of the enemies of those in power.
  • it seems that this is actually what he believes. In April, he tweeted: “By ‘free speech’, I simply mean that which matches the law. I am against censorship that goes far beyond the law. If people want less free speech, they will ask government to pass laws to that effect. Therefore, going beyond the law is contrary to the will of the people.”
  • Musk is either exceptionally naïve or willfully ignorant about the relationship between government power and free speech, especially in fragile democracies.
  • The combination of a rigid commitment to following national laws and a hands-off approach to content moderation is combustible and highly dangerous.
  • Independent journalism is increasingly under threat in India. Much of the mainstream press has been neutered by a mix of intimidation and conflicts of interests created by the sprawling conglomerates and powerful families that control much of Indian media
  • Twitter has historically fought against censorship. Whether that will continue under Musk seems very much a question. The Indian government has reasons to expect friendly treatment: Musk’s company Tesla has been trying to enter the Indian car market for some time, but in May it hit an impasse in negotiations with the government over tariffs and other issues

The Sad Trombone Debate: The RNC Throws in the Towel and Gets Ready to Roll Over for Tr...

  • Death to the Internet
  • Yesterday Ben Thompson published a remarkable essay in which he more or less makes the case that the internet is a socially deleterious invention, that it will necessarily get more toxic, and that the best we can hope for is that it gets so bad, so fast, that everyone is shocked into turning away from it.
  • Ben writes the best and most insightful newsletter about technology and he has been, in all the years I’ve read him, a techno-optimist.
  • this is like if Russell Moore came out and said that, on the whole, Christianity turns out to be a bad thing. It’s that big of a deal.
  • Thompson’s case centers around constraints and supply, particularly as they apply to content creation.
  • In the pre-internet days, creating and distributing content was relatively expensive, which placed content publishers—be they newspapers, or TV stations, or movie studios—high on the value chain.
  • The internet reduced distribution costs to zero and this shifted value away from publishers and over to aggregators: Suddenly it was more important to aggregate an audience—a la Google and Facebook—than to be a content creator.
  • Audiences were valuable; content was commoditized.
  • What has alarmed Thompson is that AI has now reduced the cost of creating content to zero.
  • what does the world look like when the cost of both creating and distributing content is zero?
  • Hellscape
  • We’re headed to a place where content is artificially created and distributed in such a way as to be tailored to a given user’s preferences. Which will be the equivalent of living in a hall of mirrors.
  • What does that mean for news? Nothing good.
  • It doesn’t really make sense to talk about “news media” because there are fundamental differences between publication models that are driven by scale.
  • So the challenges the New York Times face will be different than the challenges that NPR or your local paper face.
  • Two big takeaways:
  • (1) Ad-supported publications will not survive
  • Zero-cost for content creation combined with zero-cost distribution means an infinite supply of content. The more content you have, the more ad space exists—the lower ad prices go.
  • Actually, some ad-supported publications will survive. They just won’t be news. What will survive will be content mills that exist to serve ads specifically matched to targeted audiences.
  • (2) Size is determinative.
  • The New York Times has a moat by dint of its size. It will see the utility of its soft “news” sections decline in value, because AI is going to be better at creating cooking and style content than breaking hard news. But still, the NYT will be okay because it has pivoted hard into being a subscription-based service over the last decade.
  • At the other end of the spectrum, independent journalists should be okay. A lone reporter running a focused Substack who only needs four digits’ worth of subscribers to sustain them.
  • But everything in between? That’s a crapshoot.
  • Technology writers sometimes talk about the contrast between “builders” and “conservers” — roughly speaking, between those who are most animated by what we stand to gain from technology and those animated by what we stand to lose.
  • in our moment the builder and conserver types are proving quite mercurial. On issues ranging from Big Tech to medicine, human enhancement to technologies of governance, the politics of technology are in upheaval.
  • Dispositions are supposed to be basically fixed. So who would have thought that deep blue cities that yesterday were hotbeds of vaccine skepticism would today become pioneers of vaccine passports? Or that outlets that yesterday reported on science and tech developments in reverent tones would today make it their mission to unmask “tech bros”?
  • One way to understand this churn is that the builder and the conserver types each speak to real, contrasting features within human nature. Another way is that these types each pick out real, contrasting features of technology. Focusing strictly on one set of features or the other eventually becomes unstable, forcing the other back into view.

'Meta-Content' Is Taking Over the Internet - The Atlantic

  • Jenn, however, has complicated things by adding an unexpected topic to her repertoire: the dangers of social media. She recently spoke about disengaging from it for her well-being; she also posted an Instagram Story about the risks of ChatGPT
  • and, in none other than a YouTube video, recommended Neil Postman’s Amusing Ourselves to Death, a seminal piece of media critique from 1985 that denounces television’s reduction of life to entertainment.
  • (Her other book recommendations included Stolen Focus, by Johann Hari, and Recapture the Rapture, by Jamie Wheal.)
  • Social-media platforms are “preying on your insecurities; they’re preying on your temptations,” Jenn explained to me in an interview that shifted our parasocial connection, at least for an hour, to a mere relationship. “And, you know, I do play a role in this.” Jenn makes money through aspirational advertising, after all—a familiar part of any influencer’s job.
  • She’s pro–parasocial relationships, she explains to the camera, but only if we remain aware that we’re in one. “This relationship does not replace existing friendships, existing relationships,” she emphasizes. “This is all supplementary. Like, it should be in addition to your life, not a replacement.” I sat there watching her talk about parasocial relationships while absorbing the irony of being in one with her.
  • The open acknowledgment of social media’s inner workings, with content creators exposing the foundations of their content within the content itself, is what Alice Marwick, an associate communications professor at the University of North Carolina at Chapel Hill, described to me as “meta-content.”
  • Meta-content can be overt, such as the vlogger Casey Neistat wondering, in a vlog, if vlogging your life prevents you from being fully present in it;
  • But meta-content can also be subtle: a vlogger walking across the frame before running back to get the camera. Or influencers vlogging themselves editing the very video you’re watching, in a moment of space-time distortion.
  • Viewers don’t seem to care. We keep watching, fully accepting the performance. Perhaps that’s because the rise of meta-content promises a way to grasp authenticity by acknowledging artifice; especially in a moment when artifice is easier to create than ever before, audiences want to know what’s “real” and what isn’t.
  • “The idea of a space where you can trust no sources, there’s no place to sort of land, everything is put into question, is a very unsettling, unsatisfying way to live.
  • So we continue to search for, as Murray observes, the “agreed-upon things, our basic understandings of what’s real, what’s true.” But when the content we watch becomes self-aware and even self-critical, it raises the question of whether we can truly escape the machinations of social media. Maybe when we stare directly into the abyss, we begin to enjoy its company.
  • “The difference between BeReal and the social-media giants isn’t the former’s relationship to truth but the size and scale of its deceptions.” BeReal users still angle their camera and wait to take their daily photo at an aesthetic time of day. The snapshots merely remind us how impossible it is to stop performing online.
  • Jenn’s concern over the future of the internet stems, in part, from motherhood. She recently had a son, Lennon (whose first birthday party I watched on YouTube), and worries about the digital world he’s going to inherit.
  • Back in the age of MySpace, she had her own internet friends and would sneak out to parking lots at 1 a.m. to meet them in real life: “I think this was when technology was really used as a tool to connect us.” Now, she explained, it’s beginning to ensnare us. Posting content online is no longer a means to an end so much as the end itself.
  • We used to view influencers’ lives as aspirational, a reality that we could reach toward. Now both sides acknowledge that they’re part of a perfect product that the viewer understands is unattainable and the influencer acknowledges is not fully real.
  • “I forgot to say this to her in the interview, but I truly think that my videos are less about me and more of a reflection of where you are currently … You are kind of reflecting on your own life and seeing what resonates [with] you, and you’re discarding what doesn’t. And I think that’s what’s beautiful about it.”
  • meta-content is fundamentally a compromise. Recognizing the delusion of the internet doesn’t alter our course within it so much as remind us how trapped we truly are—and how we wouldn’t have it any other way.
29More

Can truth survive this president? An honest investigation. - The Washington Post - 0 views

  • in the summer of 2002, long before “fake news” or “post-truth” infected the vernacular, one of President George W. Bush’s top advisers mocked a journalist for being part of the “reality-based community.” Seeking answers in reality was for suckers, the unnamed adviser explained. “We’re an empire now, and when we act, we create our own reality.”
  • This was the hubris and idealism of a post-Cold War, pre-Iraq War superpower: If you exert enough pressure, events will bend to your will.
  • the deceit emanating from the White House today is lazier, more cynical. It is not born of grand strategy or ideology; it is impulsive and self-serving. It is not arrogant, but shameless.
  • ...26 more annotations...
  • Bush wanted to remake the world. President Trump, by contrast, just wants to make it up as he goes along
  • Through all their debates over who is to blame for imperiling truth (whether Trump, postmodernism, social media or Fox News), as well as the consequences (invariably dire) and the solutions (usually vague), a few conclusions materialize, should you choose to believe them.
  • There is a pattern and logic behind the dishonesty of Trump and his surrogates; however, it’s less multidimensional chess than the simple subordination of reality to political and personal ambition
  • Trump’s untruth sells best precisely when feelings and instincts overpower facts, when America becomes a safe space for fabrication.
  • Rand Corp. scholars Jennifer Kavanagh and Michael D. Rich point to the Gilded Age, the Roaring Twenties and the rise of television in the mid-20th century as recent periods of what they call “Truth Decay” — marked by growing disagreement over facts and interpretation of data; a blurring of lines between opinion, fact and personal experience; and diminishing trust in once-respected sources of information.
  • In eras of truth decay, “competing narratives emerge, tribalism within the U.S. electorate increases, and political paralysis and dysfunction grow,”
  • Once you add the silos of social media as well as deeply polarized politics and deteriorating civic education, it becomes “nearly impossible to have the types of meaningful policy debates that form the foundation of democracy.”
  • To interpret our era’s debasement of language, Kakutani reflects perceptively on the World War II-era works of Victor Klemperer, who showed how the Nazis used “words as ‘tiny doses of arsenic’ to poison and subvert the German culture,” and of Stefan Zweig, whose memoir “The World of Yesterday” highlights how ordinary Germans failed to grasp the sudden erosion of their freedoms.
  • Kakutani calls out lefty academics who for decades preached postmodernism and social constructivism, which argued that truth is not universal but a reflection of relative power, structural forces and personal vantage points.
  • postmodernists rejected Enlightenment ideals as “vestiges of old patriarchal and imperialist thinking,” Kakutani writes, paving the way for today’s violence against fact in politics and science.
  • “dumbed-down corollaries” of postmodernist thought have been hijacked by Trump’s defenders, who use them to explain away his lies, inconsistencies and broken promises.
  • intelligent-design proponents and later climate deniers drew from postmodernism to undermine public perceptions of evolution and climate change. “Even if right-wing politicians and other science deniers were not reading Derrida and Foucault, the germ of the idea made its way to them: science does not have a monopoly on the truth,
  • McIntyre quotes at length from mea culpas by postmodernist and social constructivist writers agonizing over what their theories have wrought, shocked that conservatives would use them for nefarious purposes
  • pro-Trump troll and conspiracy theorist Mike Cernovich, who helped popularize the “Pizzagate” lie, has forthrightly cited his unlikely influences. “Look, I read postmodernist theory in college,” Cernovich told the New Yorker in 2016. “If everything is a narrative, then we need alternatives to the dominant narrative. I don’t seem like a guy who reads [Jacques] Lacan, do I?”
  • When truth becomes malleable and contestable regardless of evidence, a mere tussle of manufactured narratives, it becomes less about conveying facts than about picking sides, particularly in politics.
  • In “On Truth,” Cambridge University philosopher Simon Blackburn writes that truth is attainable, if at all, “only at the vanishing end points of enquiry,” adding that, “instead of ‘facts first’ we may do better if we think of ‘enquiry first,’ with the notion of fact modestly waiting to be invited to the feast afterward.
  • He is concerned, but not overwhelmingly so, about the survival of truth under Trump. “Outside the fevered world of politics, truth has a secure enough foothold,” Blackburn writes. “Perjury is still a serious crime, and we still hope that our pilots and surgeons know their way about.
  • Kavanagh and Rich offer similar consolation: “Facts and data have become more important in most other fields, with political and civil discourse being striking exceptions. Thus, it is hard to argue that the world is truly ‘post-fact.’ ”
  • McIntyre argues persuasively that our methods of ascertaining truth — not just the facts themselves — are under attack, too, and that this assault is especially dangerous.
  • Ideologues don’t just disregard facts they disagree with, he explains, but willingly embrace any information, however dubious, that fits their agenda. “This is not the abandonment of facts, but a corruption of the process by which facts are credibly gathered and reliably used to shape one’s beliefs about reality. Indeed, the rejection of this undermines the idea that some things are true irrespective of how we feel about them.”
  • “It is hardly a depressing new phenomenon that people’s beliefs are capable of being moved by their hopes, grievances and fears,” Blackburn writes. “In order to move people, objective facts must become personal beliefs.” But it can’t work — or shouldn’t work — in reverse.
  • More than fearing a post-truth world, Blackburn is concerned by a “post-shame environment,” in which politicians easily brush off their open disregard for truth.
  • it is human nature to rationalize away the dissonance. “Why get upset by his lies, when all politicians lie?” Kakutani asks, distilling the mind-set. “Why get upset by his venality, when the law of the jungle rules?”
  • So any opposition is deemed a witch hunt, or fake news, rigged or just so unfair. Trump is not killing the truth. But he is vandalizing it, constantly and indiscriminately, diminishing its prestige and appeal, coaxing us to look away from it.
  • the collateral damage includes the American experiment.
  • “One of the most important ways to fight back against post-truth is to fight it within ourselves,” he writes, whatever our particular politics may be. “It is easy to identify a truth that someone else does not want to see. But how many of us are prepared to do this with our own beliefs? To doubt something that we want to believe, even though a little piece of us whispers that we do not have all the facts?”
8More

The Choose-Your-Own-News Adventure - The New York Times - 0 views

  • some new twist on the modern media sphere’s rush to give you exactly what you want when you want it.
  • No matter how far the experiment goes, Netflix is again in step with the national zeitgeist. After all, there are algorithms for streaming music services like Spotify, for Facebook’s news feed and for Netflix’s own program menu, working to deliver just what you like while filtering out whatever might turn you off and send you away — the sorts of data-driven honey traps that are all the talk at the South by Southwest Interactive Festival going on here through this week.
  • “You used to be a consumer of reality, and now you’re a designer of reality.”
  • ...4 more annotations...
  • It started with President Trump’s Twitter posts accusing former President Barack Obama of having wiretapped his phones at Trump Tower.
  • The proof, you would have heard him say, was already out there in the mainstream media — what with a report on the website Heat Street saying that the Federal Bureau of Investigation had secured a warrant to investigate ties between people in Mr. Trump’s campaign and Russia, and articles in The New York Times, in The Washington Post and elsewhere about intelligence linking people in Mr. Trump’s campaign to Russia, some of it from wiretaps.
  • You could throw on the goggles, become a bird and fly around. If virtual reality can allow a human to become a bird, why couldn’t it allow you to live more fully in your own political reality — don the goggles and go live full time in the adventure of your choosing: A, B or C.
  • Just watch out for that wall you’re about to walk into IRL (in real life). Or, hey, don’t — knock yourself out.
  •  
    This new design reminds me of how the internet keeps us inside our comfort zone. Although in theory there is an almost infinite amount of information on the internet, we can only reach a very small proportion of it, and people tend to read the information that supports their ideas or fits their interests. So news services have started to design systems that only provide readers with what they want, or would like, to see. That does not help diversify people's minds, which is what the internet should be doing. In the quote, Dan Wagner said: "you're a designer of reality", but I interpret this as meaning we are the designers of our own reality. That will only isolate people from each other: without living in the same reality, people won't have real communication. So I think this new design does have cons. --Sissi (3/14/2017)
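    The filtering mechanism described in the highlights above can be made concrete with a small toy model. The sketch below is a minimal illustration, not any platform's actual algorithm: the category names, the weights, and the 0.1 exploration floor are all invented for this example. What it shows is only the feedback loop at issue: the system recommends more of whatever a user already clicks, so the rest of the menu gradually disappears.

```python
# Toy model of engagement-driven filtering (hypothetical, for illustration only):
# the recommender keeps offering the categories a user already clicks on,
# so the share of other viewpoints shrinks over time.
import random
from collections import Counter

CATEGORIES = ["left", "right", "sports", "science", "celebrity"]

def recommend(click_history, k=5):
    """Weight each category by past clicks (plus a small floor) and sample k items."""
    counts = Counter(click_history)
    weights = [counts[c] + 0.1 for c in CATEGORIES]  # 0.1 keeps unseen topics barely alive
    return random.choices(CATEGORIES, weights=weights, k=k)

def simulate(days=30, preferred="left"):
    """A user who clicks their preferred category whenever it is offered."""
    history = []
    for _ in range(days):
        feed = recommend(history)
        clicked = preferred if preferred in feed else random.choice(feed)
        history.append(clicked)
    return Counter(history)

if __name__ == "__main__":
    random.seed(0)
    print(simulate())  # after a month, the feed is dominated by the preferred category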
16More

Opinion | The Only Answer Is Less Internet - The New York Times - 0 views

  • In our age of digital connection and constantly online life, you might say that two political regimes are evolving, one Chinese and one Western
  • The first regime is one in which your every transaction can be fed into a system of ratings and rankings
  • in which what seem like merely personal mistakes can cost you your livelihood and reputation, even your ability to hail a car or book a reservation
  • ...13 more annotations...
  • It’s one in which notionally private companies cooperate with the government to track dissidents and radicals and censor speech
  • one in which your fellow citizens act as enforcers of the ideological consensus, making an example of you for comments you intended only for your friends
  • one in which even the wealth and power of your overlords can't buy privacy.
  • The second regime is the one they’re building in the People’s Republic of China.
  • Beijing has treated the darkest episodes of “Black Mirror” as a how-to guide for social control and subjugation
  • Unlike China’s system, our emerging post-privacy order is not (for now) totalitarian; its impositions are more decentralized and haphazard, more circumscribed and civilized, less designed and more evolved, more random in the punishments inflicted and the rules enforced.
  • our system cannot help recreating features of the Chinese order, because the way that we live on the internet leaves us naked before power in a radical new way.
  • the Western order in the internet age might be usefully described as a “liberalism with some police-state characteristics.” Those characteristics are shaped and limited by our political heritage of rights and individualism. But there is still plainly an authoritarian edge, a gentle “pink police state” aspect, to the new world that online life creates.
  • apart from the high-minded and the paranoid, privacy per se is not a major issue in our politics
  • for those who object inherently to our new nakedness, regard the earthquakes as too high a price for Amazon’s low prices, or fear what an Augustus or a Robespierre might someday do with all this architecture, the best hope for a partial restoration of privacy has to involve more than just an anxiety about privacy alone.
  • It requires a more general turn against the virtual, in which fears of digital nakedness are just one motivator among many — the political piece of a cause that’s also psychological, intellectual, aesthetic and religious.
  • This is the hard truth suggested by our online experience so far: That a movement to restore privacy must be, at some level, a movement against the internet
  • Not a pure Luddism, but a movement for limits, for internet-free spaces, for zones of enforced pre-virtual reality (childhood and education above all), for social conventions that discourage career-destroying tweets and crotch shots by encouraging us to put away our iPhones.
28More

How YouTube Drives People to the Internet's Darkest Corners - WSJ - 0 views

  • YouTube is the new television, with more than 1.5 billion users, and videos the site recommends have the power to influence viewpoints around the world.
  • Those recommendations often present divisive, misleading or false content despite changes the site has recently made to highlight more-neutral fare, a Wall Street Journal investigation found.
  • Behind that growth is an algorithm that creates personalized playlists. YouTube says these recommendations drive more than 70% of its viewing time, making the algorithm among the single biggest deciders of what people watch.
  • ...25 more annotations...
  • People cumulatively watch more than a billion YouTube hours daily world-wide, a 10-fold increase from 2012
  • After the Journal this week provided examples of how the site still promotes deceptive and divisive videos, YouTube executives said the recommendations were a problem.
  • When users show a political bias in what they choose to view, YouTube typically recommends videos that echo those biases, often with more-extreme viewpoints.
  • Such recommendations play into concerns about how social-media sites can amplify extremist voices, sow misinformation and isolate users in “filter bubbles”
  • Unlike Facebook Inc. and Twitter Inc. sites, where users see content from accounts they choose to follow, YouTube takes an active role in pushing information to users they likely wouldn’t have otherwise seen.
  • “The editorial policy of these new platforms is to essentially not have one,”
  • “That sounded great when it was all about free speech and ‘in the marketplace of ideas, only the best ones win.’ But we’re seeing again and again that that’s not what happens. What’s happening instead is the systems are being gamed and people are being gamed.”
  • YouTube has been tweaking its algorithm since last autumn to surface what its executives call “more authoritative” news sources
  • YouTube last week said it is considering a design change to promote relevant information from credible news sources alongside videos that push conspiracy theories.
  • The Journal investigation found YouTube’s recommendations often lead users to channels that feature conspiracy theories, partisan viewpoints and misleading videos, even when those users haven’t shown interest in such content.
  • YouTube engineered its algorithm several years ago to make the site “sticky”—to recommend videos that keep users staying to watch still more, said current and former YouTube engineers who helped build it. The site earns money selling ads that run before and during videos.
  • YouTube’s algorithm tweaks don’t appear to have changed how YouTube recommends videos on its home page. On the home page, the algorithm provides a personalized feed for each logged-in user largely based on what the user has watched.
  • There is another way to calculate recommendations, demonstrated by YouTube’s parent, Alphabet Inc.’s Google. It has designed its search-engine algorithms to recommend sources that are authoritative, not just popular.
  • Google spokeswoman Crystal Dahlen said that Google improved its algorithm last year “to surface more authoritative content, to help prevent the spread of blatantly misleading, low-quality, offensive or downright false information,” adding that it is “working with the YouTube team to help share learnings.”
  • In recent weeks, it has expanded that change to other news-related queries. Since then, the Journal’s tests show, news searches in YouTube return fewer videos from highly partisan channels.
  • YouTube’s recommendations became even more effective at keeping people on the site in 2016, when the company began employing an artificial-intelligence technique called a deep neural network that makes connections between videos that humans wouldn’t. The algorithm uses hundreds of signals, YouTube says, but the most important remains what a given user has watched.
  • Using a deep neural network makes the recommendations more of a black box to engineers than previous techniques,
  • “We don’t have to think as much,” he said. “We’ll just give it some raw data and let it figure it out.”
  • To better understand the algorithm, the Journal enlisted former YouTube engineer Guillaume Chaslot, who worked on its recommendation engine, to analyze thousands of YouTube’s recommendations on the most popular news-related queries
  • Mr. Chaslot created a computer program that simulates the “rabbit hole” users often descend into when surfing the site. In the Journal study, the program collected the top five results to a given search. Next, it gathered the top three recommendations that YouTube promoted once the program clicked on each of those results. Then it gathered the top three recommendations for each of those promoted videos, continuing four clicks from the original search.
  • The first analysis, of November’s top search terms, showed YouTube frequently led users to divisive and misleading videos. On the 21 news-related searches left after eliminating queries about entertainment, sports and gaming—such as “Trump,” “North Korea” and “bitcoin”—YouTube most frequently recommended these videos:
  • The algorithm doesn’t seek out extreme videos, they said, but looks for clips that data show are already drawing high traffic and keeping people on the site. Those videos often tend to be sensationalist and on the extreme fringe, the engineers said.
  • Repeated tests by the Journal as recently as this week showed the home page often fed far-right or far-left videos to users who watched relatively mainstream news sources, such as Fox News and MSNBC.
  • Searching some topics and then returning to the home page without doing a new search can produce recommendations that push users toward conspiracy theories even if they seek out just mainstream sources.
  • After searching for “9/11” last month, then clicking on a single CNN clip about the attacks, and then returning to the home page, the fifth and sixth recommended videos were about claims the U.S. government carried out the attacks. One, titled “Footage Shows Military Plane hitting WTC Tower on 9/11—13 Witnesses React”—had 5.3 million views.
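  • The Journal's methodology, as summarized in the highlights above, is essentially a breadth-first crawl of YouTube's recommendation graph. The sketch below is a rough approximation under stated assumptions: search() and recommended() are placeholder stubs, not real YouTube API calls, and the exact click-depth bookkeeping of Mr. Chaslot's actual program may differ from this reading of "four clicks from the original search."

```python
# Rough sketch of the crawl described above. search() and recommended() are
# placeholder stubs standing in for whatever scraping or API access the real
# program used; they are NOT a real YouTube API. Depth bookkeeping is an
# approximate reading of "four clicks from the original search": one click on
# a search result, then three levels of promoted recommendations.
def search(query, n=5):
    """Top n search results for a query (stub; returns fake video IDs)."""
    return [f"{query}/result{i}" for i in range(n)]

def recommended(video_id, n=3):
    """Top n 'Up next' recommendations for a video (stub; returns fake IDs)."""
    return [f"{video_id}>rec{i}" for i in range(n)]

def rabbit_hole(query, rec_levels=3, top_results=5, top_recs=3):
    """Breadth-first walk over the recommendation graph, collecting what is promoted."""
    collected = []
    frontier = search(query, top_results)        # click 1: the search results
    for level in range(rec_levels):              # clicks 2..4: follow recommendations
        next_frontier = []
        for video in frontier:
            recs = recommended(video, top_recs)
            collected.extend((level + 2, rec) for rec in recs)  # record click depth
            next_frontier.extend(recs)
        frontier = next_frontier
    return collected

if __name__ == "__main__":
    videos = rabbit_hole("9/11")
    print(f"collected {len(videos)} promoted videos within four clicks of the search")
```

  • With five seeds and three recommendations per click, a study like this gathers a couple of hundred promoted videos per query, which is what lets the researchers say something statistical about where the algorithm steers people rather than relying on anecdotes.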
32More

The Class Politics of Instagram Face - Tablet Magazine - 0 views

  • by approaching universality, Instagram Face actually secured its role as an instrument of class distinction—a mark of a certain kind of woman. The women who don’t mind looking like others, or the conspicuousness of the work they’ve had done
  • Instagram Face goes with implants, middle-aged dates and nails too long to pick up the check. Batting false eyelashes, there in the restaurant it orders for dinner all the food groups of nouveau riche Dubai: caviar, truffle, fillers, foie gras, Botox, bottle service, bodycon silhouettes. The look, in that restaurant and everywhere, has reached a definite status. It’s the girlfriend, not the wife.
  • Does cosmetic work have a particular class? It has a price tag, which can amount to the same thing, unless that price drops low enough.
  • ...29 more annotations...
  • Before the introduction of Botox and hyaluronic acid dermal fillers in 2002 and 2003, respectively, aesthetic work was serious, expensive. Nose jobs and face lifts required general anesthesia, not insignificant recovery time, and cost thousands of dollars (in 2000, a facelift was $5,416 on average, and a rhinoplasty $4,109, around $9,400 and $7,000 adjusted).
  • In contrast, the average price of a syringe of hyaluronic acid filler today is $684, while treating, for example, the forehead and eyes with Botox will put you out anywhere from $300 to $600
  • We copied the beautiful and the rich, not in facsimile, but in homage.
  • In 2018, use of Botox and fillers was up 18% and 20% from five years prior. Philosophies of prejuvenation have made Botox use jump 22% among 22- to 37-year-olds in half a decade as well. By 2030, global noninvasive aesthetic treatments are predicted to triple.
  • The trouble is that a status symbol, without status, is common.
  • Beauty has always been exclusive. When someone strikes you as pretty, it means they are something that everyone else is not.
  • It’s a zero-sum game, as relative as our morals. Naturally, we hoard of beauty what we can. It’s why we call grooming tips “secrets.”
  • Largely the secrets started with the wealthy, who possess the requisite money and leisure to spare on their appearances
  • Botox and filler only accelerated a trend that began in the ’70s and ’80s and is just now reaching its saturation point.
  • we didn’t have the tools for anything more than emulation. Fake breasts and overdrawn lips only approximated real ones; a birthmark drawn with pencil would always be just that.
  • Instagram Face, on the other hand, distinguishes itself by its sheer reproducibility. Not only because of those new cosmetic technologies, which can truly reshape features, at reasonable cost and with little risk.
  • built in to the whole premise of reversible, low-stakes modification is an indefinite flux, and thus a lack of discretion.
  • Instagram Face has replicated outward, with trendsetters giving up competing with one another in favor of looking eerily alike. And obviously it has replicated down.
  • Eva looks like Eva. If she has procedures in common with Kim K, you couldn’t tell. “I look at my features and I think long and hard of how I can, without looking different and while keeping as natural as possible, make them look better and more proportional. I’m against everything that is too invasive. My problem with Instagram Face is that if you want to look like someone else, you should be in therapy.”
  • natural looks have always been, and still are, more valuable than artificial ones. Partly because of our urge to legitimize in any way we can the advantages we have over other people. Hotness is a class struggle.
  • As more and more women post videos of themselves eating, sleeping, dressing, dancing, and Only-Fanning online, in a logical bid for economic ascendance, the women who haven’t needed to do that gain a new status symbol.
  • Privacy. A life which is not a ticketed show. An intimacy that does not admit advertisers. A face that does not broadcast its insecurity, or the work undergone to correct it.
  • Upper class, private women get discreet work done. The differences aren’t in the procedures themselves—they’re the same—but in disposition
  • Eva, who lives between central London, Geneva, and the south of France, says: “I do stuff, but none of the stuff I do is at all in my head associated with Instagram Face. Essentially you do similar procedures, but the end goal is completely different. Because they are trying to get the result of looking like another human being, and I’m just beautifying myself.”
  • But the more rapidly it replicates, and the clearer our manuals for quick imitation become, the closer we get to singularity—that moment Kim Kardashian fears unlike any other: the moment when it becomes unclear whether we’re copying her, or whether she is copying us.
  • what he restores is complicated and yet not complicated at all. It’s herself, the fingerprint of her features. Her aura, her presence and genealogy, her authenticity in space and time.
  • Dr. Taktouk’s approach is “not so formulaic.” He aims to give his patients the “better versions of themselves.” “It’s not about trying to be anyone else,” he says, “or creating a conveyor belt of patients. It’s about working with your best features, enhancing them, but still looking like you.”
  • “Vulgar” says that in pursuing indistinguishability, women have been duped into another punishing divide. “Vulgar” says that the subtlety of his work is what signals its special class—and that the women who’ve obtained Instagram Face for mobility’s sake have unwittingly shut themselves out of it.
  • While younger women are dissolving their gratuitous work, the 64-year-old Madonna appeared at the Grammy Awards in early February, looking so tragically unlike herself that the internet launched an immediate postmortem.
  • The folly of Instagram Face is that in pursuing a bionic ideal, it turns cosmetic technology away from not just the reality of class and power, but also the great, poignant, painful human project of trying to reverse time. It misses the point of what we find beautiful: that which is ephemeral, and can’t be reproduced
  • Age is just one of the hierarchies Instagram Face can’t topple, in the history of women striving versus the women already arrived. What exactly have they arrived at?
  • Youth, temporarily. Wealth. Emotional security. Privacy. Personal choices, like cosmetic decisions, which are not so public, and do not have to be defended as empowered, in the defeatist humiliation of our times
  • Maybe they’ve arrived at love, which for women has never been separate from the things I’ve already mentioned.
  • I can’t help but recall the time I was chatting with a plastic surgeon. I began to point to my features, my flaws. I asked her, “What would you do to me, if I were your patient?” I had many ideas. She gazed at me, and then noticed my ring. “Nothing,” she said. “You’re already married.”
7More

A Super-Simple Way to Understand the Net Neutrality Debate - NYTimes.com - 0 views

  • there is a really simple way of thinking of the debate over net neutrality: Is access to the Internet more like access to electricity, or more like cable television service?
  • For all the technical complexity of generating electricity and distributing it to millions of people, the economic arrangement is very simple: I give them money. They give me electricity. I do with it what I will.
  • Comcast, my cable provider, offers me a menu of packages from which I might choose, each with a different mix of channels. It goes through long and sometimes arduous negotiations with the owners of those cable channels and has a different business arrangement with each of them. The details of those arrangements are opaque to me as the consumer; all I know is that I can get the movie package for X dollars a month or the sports package for Y dollars and so on.
  • ...4 more annotations...
  • One theory of the case, and the one that the Obama administration embraced Monday, is that the Internet is like electricity. It is fundamental to the 21st century economy, as essential to functioning in modern society as electricity. It is a public utility. “We cannot allow Internet service providers (ISPs) to restrict the best access or to pick winners and losers in the online marketplace for services and ideas,” the president said in his written statement.
  • just as your electric utility has no say in how you use the electricity they sell you, the Internet should be a reliable way to access content produced by anyone, regardless of whether they have any special business arrangement with the utility.
  • Those arguing against net neutrality, most significantly the cable companies, say the Internet will be a richer experience if the profit motive applies, if they can negotiate deals with major content providers (the equivalent of cable channels) so that Netflix or Hulu or other streaming services that use huge bandwidth have to pay for the privilege.
  • It would also give your Internet provider considerably more economic leverage. It would, in the non-net-neutrality world, be free to throttle the speed with which you could access services that don’t pay up, or block sites entirely, as surely as you cannot watch a cable channel that your cable provider chooses not to offer (perhaps because of a dispute with the channel over fees).
18More

You Are Already Living Inside a Computer - The Atlantic - 1 views

  • Nobody really needs smartphone-operated bike locks or propane tanks. And they certainly don’t need gadgets that are less trustworthy than the “dumb” ones they replace, a sin many smart devices commit. But people do seem to want them—and in increasing numbers.
  • Why? One answer is that consumers buy what is on offer, and manufacturers are eager to turn their dumb devices smart. Doing so allows them more revenue, more control, and more opportunity for planned obsolescence. It also creates a secondary market for data collected by means of these devices. Roomba, for example, hopes to deduce floor plans from the movement of its robotic home vacuums so that it can sell them as business intelligence.
  • And the more people love using computers for everything, the more life feels incomplete unless it takes place inside them.
  • ...15 more annotations...
  • Computers already are predominant, human life already takes place mostly within them, and people are satisfied with the results.
  • These devices pose numerous problems. Cost is one. Like a cheap propane gauge, a traditional bike lock is a commodity. It can be had for $10 to $15, a tenth of the price of Nokē’s connected version. Security and privacy are others. The CIA was rumored to have a back door into Samsung TVs for spying. Disturbed people have been caught speaking to children over hacked baby monitors. A botnet commandeered thousands of poorly secured internet-of-things devices to launch a massive distributed denial-of-service attack against the domain-name system
  • Reliability plagues internet-connected gadgets, too. When the network is down, or the app’s service isn’t reachable, or some other software behavior gets in the way, the products often cease to function properly—or at all.
  • Turing guessed that machines would become most compelling when they became convincing companions, which is essentially what today’s smartphones (and smart toasters) do.
  • But Turing never claimed that machines could think, let alone that they might equal the human mind. Rather, he surmised that machines might be able to exhibit convincing behavior.
  • People choose computers as intermediaries for the sensual delight of using computers
  • One such affection is the pleasure of connectivity. You don’t want to be offline. Why would you want your toaster or doorbell to suffer the same fate? Today, computational absorption is an ideal. The ultimate dream is to be online all the time, or at least connected to a computational machine of some kind.
  • Doorbells and cars and taxis hardly vanish in the process. Instead, they just get moved inside of computers.
  • “Being a computer” means something different today than in 1950, when Turing proposed the imitation game. Contra the technical prerequisites of artificial intelligence, acting like a computer often involves little more than moving bits of data around, or acting as a controller or actuator. Grill as computer, bike lock as computer, television as computer. An intermediary
  • Or consider doorbells once more. Forget Ring, the doorbell has already retired in favor of the computer. When my kids’ friends visit, they just text a request to come open the door. The doorbell has become computerized without even being connected to an app or to the internet. Call it “disruption” if you must, but doorbells and cars and taxis hardly vanish in the process. Instead, they just get moved inside of computers, where they can produce new affections.
  • The present status of intelligent machines is more powerful than any future robot apocalypse.
  • Why would anyone ever choose a solution that doesn’t involve computers, when computers are available? Propane tanks and bike locks are still edge cases, but ordinary digital services work similarly: The services people seek out are the ones that allow them to use computers to do things—from finding information to hailing a cab to ordering takeout. This is a feat of aesthetics as much as it is one of business. People choose computers as intermediaries for the sensual delight of using computers, not just as practical, efficient means for solving problems.
  • This is not where anyone thought computing would end up. Early dystopic scenarios cautioned that the computer could become a bureaucrat or a fascist, reducing human behavior to the predetermined capacities of a dumb machine. Or else, that obsessive computer use would be deadening, sucking humans into narcotic detachment. Those fears persist to some extent, partly because they have been somewhat realized. But they have also been inverted. Being away from them now feels deadening, rather than being attached to them without end. And thus, the actions computers take become self-referential: to turn more and more things into computers to prolong that connection.
  • But the real present status of intelligent machines is both humdrum and more powerful than any future robot apocalypse. Turing is often called the father of AI, but he only implied that machines might become compelling enough to inspire interaction. That hardly counts as intelligence, artificial or real. It’s also far easier to achieve. Computers already have persuaded people to move their lives inside of them. The machines didn’t need to make people immortal, or promise to serve their every whim, or to threaten to destroy them absent assent. They just needed to become a sufficient part of everything human beings do such that they can’t—or won’t—imagine doing those things without them.
  • The real threat of computers isn’t that they might overtake and destroy humanity with their future power and intelligence. It’s that they might remain just as ordinary and impotent as they are today, and yet overtake us anyway.
20More

Understanding What's Wrong With Facebook | Talking Points Memo - 0 views

  • to really understand the problem with Facebook we need to understand the structural roots of that problem, how much of it is baked into the core architecture of the site and its very business model
  • much of it is inherent in the core strategies of the post-2000, second wave Internet tech companies that now dominate our information space and economy.
  • Facebook is an ingenious engine for information and ideational manipulation.
  • ...17 more annotations...
  • Good old fashioned advertising does that to a degree. But Facebook is much more powerful, adaptive and efficient.
  • Facebook is designed to do specific things. It’s an engine to understand people’s minds and then manipulate their thinking.
  • Those tools are refined for revenue making but can be used for many other purposes. That makes it ripe for misuse and bad acting.
  • The core of all second wave Internet commerce operations was finding network models where costs grow mathematically and revenues grow exponentially.
  • The network and its dominance is the product and once it takes hold the cost inputs remained constrained while the revenues grow almost without limit.
  • Facebook is best understood as a fantastically profitable nuclear energy company whose profitability is based on dumping the waste on the side of the road and accepting frequent accidents and explosions as inherent to the enterprise.
  • That’s why these companies employ so few people relative to scale and profitability.
  • That’s why there’s no phone support for Google or Facebook or Twitter. If half the people on the planet are ‘customers’ or users that’s not remotely possible.
  • The core economic model requires doing all of it on the cheap. Indeed, what Zuckerberg et al. have created with Facebook is so vast that the money required not to do it on the cheap almost defies imagination.
  • Facebook’s core model and concept requires not taking responsibility for what others do with the engine created to drive revenue.
  • It all amounts to a grand exercise in socializing the externalities and keeping all the revenues for the owners.
  • Here’s a way to think about it. Nuclear power is actually incredibly cheap. The fuel is fairly plentiful and easy to pull out of the ground. You set up a little engine and it generates energy almost without limit. What makes it ruinously expensive is managing the externalities – all the risks and dangers, the radiation, accidents, the constant production of radioactive waste.
  • managing or distinguishing between legitimate and bad-acting uses of the powerful Facebook engine is one that would require huge, huge investments of money and armies of workers to manage
  • But back to Facebook. The point is that they’ve created a hugely powerful and potentially very dangerous machine
  • The core business model is based on harvesting the profits from the commercial uses of the machine and using algorithms and very, very limited personnel (relative to scale) to try to get a handle on the most outrageous and shocking abuses which the engine makes possible.
  • Zuckerberg may be a jerk and there really is a culture of bad acting within the organization. But it’s not about him being a jerk. Replace him and his team with non-jerks and you’d still have a similar core problem.
  • To manage the potential negative externalities, to take some responsibility for all the dangerous uses the engine makes possible would require money the owners are totally unwilling and in some ways are unable to spend.
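  • The cost/revenue asymmetry the piece describes can be illustrated with toy numbers, in arbitrary units. Everything in the sketch below is invented for illustration, not Facebook's actual economics: the point is only the shape of the curves, with infrastructure cost assumed roughly linear in users, ad revenue assumed superlinear thanks to network effects, and the cost of genuinely reviewing abuse (the externality the piece says is dumped on the side of the road) assumed to scale with users as well.

```python
# Toy numbers only (invented for illustration; not Facebook's actual economics).
# Infrastructure cost is assumed to grow roughly linearly with users, ad revenue
# superlinearly with the size of the network, and human review of abuse linearly too.
def toy_economics(users_millions):
    infra_cost = 0.5 * users_millions            # assumed ~linear in users
    ad_revenue = 0.002 * users_millions ** 2     # assumed network effects, ~quadratic
    review_cost = 2.0 * users_millions           # assumed cost of taking responsibility
    return infra_cost, ad_revenue, review_cost

for users in (10, 100, 1000, 2000):
    cost, revenue, review = toy_economics(users)
    print(f"{users:>5}M users: infra {cost:8.1f}  revenue {revenue:10.1f}  review {review:8.1f}")
```

  • With these made-up curves, revenue eventually dwarfs the infrastructure bill, but the review bill stays a large slice of revenue at every scale, which is the piece's point about why the externalities get socialized rather than paid for.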